
    Grad-CAM++: Improved Visual Explanations for Deep Convolutional Networks

    Over the last decade, Convolutional Neural Network (CNN) models have been highly successful in solving complex vision problems. However, these deep models are perceived as "black box" methods given the lack of understanding of their internal functioning. There has been significant recent interest in developing explainable deep learning models, and this paper is an effort in that direction. Building on a recently proposed method called Grad-CAM, we propose a generalized method called Grad-CAM++ that can provide better visual explanations of CNN model predictions, in terms of better object localization as well as explaining occurrences of multiple object instances in a single image, when compared to the state of the art. We provide a mathematical derivation for the proposed method, which uses a weighted combination of the positive partial derivatives of the last convolutional layer feature maps with respect to a specific class score as weights to generate a visual explanation for the corresponding class label. Our extensive experiments and evaluations, both subjective and objective, on standard datasets show that Grad-CAM++ provides promising human-interpretable visual explanations for a given CNN architecture across multiple tasks, including classification, image caption generation and 3D action recognition, as well as in new settings such as knowledge distillation. Comment: 17 pages, 15 figures, 11 tables. Accepted in the proceedings of the IEEE Winter Conf. on Applications of Computer Vision (WACV 2018). An extended version is under review at IEEE Transactions on Pattern Analysis and Machine Intelligence.
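    The weighting scheme described in the abstract can be made concrete with a short sketch. The NumPy version below is illustrative, not the authors' code; it assumes the common implementation trick of routing the class score through an exponential, so the second and third derivatives reduce to powers of the first-order gradients.

```python
import numpy as np

def grad_cam_pp(feature_maps, grads):
    """Grad-CAM++ saliency map from one convolutional layer (sketch).

    feature_maps: (K, H, W) activations A^k of the last conv layer
    grads:        (K, H, W) gradients dY^c/dA^k for the target class,
                  with Y^c = exp(class score), so higher-order
                  derivatives reduce to powers of `grads`.
    """
    grads_2, grads_3 = grads ** 2, grads ** 3
    # alpha weights from the paper's closed form; the fallback value
    # guards positions where the denominator vanishes
    sum_a = feature_maps.sum(axis=(1, 2), keepdims=True)
    denom = 2.0 * grads_2 + sum_a * grads_3
    alpha = grads_2 / np.where(denom != 0.0, denom, 1e-8)
    # per-map weights: spatial sum of alpha * ReLU(gradient)
    weights = (alpha * np.maximum(grads, 0.0)).sum(axis=(1, 2))
    # class-discriminative map: ReLU of the weighted combination
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    return cam / (cam.max() + 1e-8)  # normalize to [0, 1]
```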

    Productivity improvement through OEE measurement: a TPM case study for a meat processing plant in Australia

    Fluctuating demand and increased competition in Australia and Asian countries have been putting more pressure on plants producing packaged meat products in Australia. Total Productive Maintenance (TPM) was seen as a solution and is currently being implemented within a major meat processing facility in Melbourne, Australia to achieve high Overall Equipment Effectiveness (OEE). Concerns were raised by the board of directors because OEE targets were not being met. TPM was initially applied in key areas of the business, thermoforming and packaging, to reduce waste and further enhance productivity and quality; it is now being rolled out to other sections of the plant. Data collected from fifty-two weeks of production were analysed and recommendations made to achieve OEE targets for the R145 production line. Risk-based maintenance was applied to control adverse effects on packaging quality, which significantly influences shelf life. The shelf life of a modified atmosphere packaged product assures consumers that the meat is safe for consumption. Risk-based maintenance considered asset failure probabilities, impacts on quality, and availability of spare parts. Reliability Centred Maintenance (RCM) produced a risk score for each maintenance activity, which was used as a component of the TPM program. Findings from this study have been passed on to the meat processing facility for implementation across the entire plant.
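    For readers unfamiliar with the metric, OEE is conventionally the product of availability, performance, and quality. A minimal sketch with purely illustrative numbers (not data from the R145 line):

```python
def oee(availability, performance, quality):
    """Overall Equipment Effectiveness as the product of its three factors."""
    return availability * performance * quality

# purely illustrative numbers, not values from the study
run_time, planned_time = 380.0, 460.0        # minutes
ideal_cycle, total_count = 0.5, 700          # minutes per unit, units produced
good_count = 665                             # units passing quality checks

availability = run_time / planned_time
performance = (ideal_cycle * total_count) / run_time
quality = good_count / total_count
print(f"OEE = {oee(availability, performance, quality):.1%}")
```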

    The GR3 Method for the Stress Analysis of Weldments

    Determination of the fatigue life of a component requires knowledge of the local maximum fluctuating stress and the through-thickness stress distribution acting at the critical cross-section. This has traditionally been achieved through the use of stress concentration factors. More recently, finite element methods have been used to determine the maximum stress acting on a weldment. Unfortunately, properly meshing large and complicated geometries requires fine meshes and can be computationally intensive and time consuming. This paper examines an alternative method for obtaining maximum stress values using coarse three-dimensional finite element meshes and the hot spot stress concept. Coarse mesh stress distributions were found to coincide with fine mesh stress distributions over the inboard 50% of a cross-section. It was also found that the moment generated by the stress distribution over the inboard half of the cross-section accounted for roughly 10% of the total moment in all of the cases studied. Because this fraction was consistent, the total moment acting on a cross-section may be predicted from the stress distribution over the inboard 50% of a structure. Given the moment acting on a cross-section, the hot spot stress may be found, and using bending and membrane stress concentration factors, the maximum stress value may be found. Finally, given the maximum stress data, the fatigue life of a component may be determined using either the strain-life approach or fatigue crack growth methods.
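    The moment-scaling argument can be sketched in a few lines. The version below is an illustrative reconstruction, not the paper's procedure: it assumes the inboard-half moment is a fixed ~10% share of the total (per the abstract), linearizes the surface bending stress as 6M/t^2 per unit width, and combines membrane and bending components through the stated concentration factors.

```python
import numpy as np

def coarse_mesh_max_stress(z, sigma, t, K_m, K_b, inboard_share=0.10):
    """Illustrative reconstruction of the moment-scaling estimate.

    z, sigma      : through-thickness positions (from the mid-plane) and
                    coarse-mesh stresses over the inboard 50% of the section
    t             : section thickness
    K_m, K_b      : membrane and bending stress concentration factors
    inboard_share : assumed fraction of the total moment carried by the
                    inboard half (~10% per the abstract)
    """
    m_inboard = np.trapz(sigma * z, z)        # moment from resolved stresses
    m_total = m_inboard / inboard_share       # scale up by the fixed ratio
    sigma_m = np.trapz(sigma, z) / (z[-1] - z[0])  # membrane (mean) stress
    sigma_b = 6.0 * m_total / t**2            # linearized bending stress
    return K_m * sigma_m + K_b * sigma_b      # hot spot / maximum stress
```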

    Remote asset management for reducing life cycle costs (LCC) and risks and enhancing asset performance

    Remote asset management faces additional challenges in monitoring conditions and in coordinating logistics for maintenance crews, transport, and spare parts for maintenance delivery and asset replacements. Recent trends in technology, remote performance monitoring, and risk-based decision making for Capital Expenditure (CAPEX) and Operations and Maintenance Expenditure (OPEX) are being embraced by asset-intensive industries around the world whose critical assets are located in geographically distributed remote areas or in locations that are difficult to inspect and maintain. Industries are also pushing boundaries by reducing crew sizes, deferring capital expenditure and overhauls, changing inspection decision making, and in some cases relaxing Original Equipment Manufacturer (OEM) recommended maintenance schedules. This paper discusses some of the issues and challenges with remote asset management. An illustrative example from heavy haul rail is used to explain how Life Cycle Costs (LCC) can be reduced and operational performance further enhanced.
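    As a point of reference for the LCC discussion, life cycle cost is often summarized as up-front CAPEX plus discounted annual OPEX. A minimal sketch with purely illustrative figures, not values from the paper:

```python
def life_cycle_cost(capex, annual_opex, discount_rate, years):
    """Present-value LCC: up-front CAPEX plus discounted annual O&M."""
    return capex + sum(annual_opex / (1.0 + discount_rate) ** n
                       for n in range(1, years + 1))

# purely illustrative figures
print(f"LCC = {life_cycle_cost(2.5e6, 1.2e5, 0.07, 25):,.0f}")
```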

    Neural Network Attributions: A Causal Perspective

    We propose a new attribution method for neural networks developed using first principles of causality (to the best of our knowledge, the first such method). The neural network architecture is viewed as a Structural Causal Model, and a methodology to compute the causal effect of each feature on the output is presented. With reasonable assumptions on the causal structure of the input data, we propose algorithms to efficiently compute the causal effects, as well as scale the approach to data with large dimensionality. We also show how this method can be used for recurrent neural networks. We report experimental results on both simulated and real datasets showcasing the promise and usefulness of the proposed algorithm. Comment: 17 pages, 10 figures. Accepted in the Proceedings of the 36th International Conference on Machine Learning (ICML 2019). Modifications: added a GitHub link to the code and fixed a typo in a figure.
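    To make the idea concrete, an interventional attribution of this kind can be estimated by Monte Carlo. The sketch below is illustrative (the names are ours, not the paper's API) and assumes the simplified setting in which input features have no causal edges among themselves, so an intervention simply fixes one column across the samples:

```python
import numpy as np

def average_causal_effect(model, X, i, a, baseline):
    """Monte Carlo estimate of E[y | do(x_i = a)] - E[y | do(x_i = baseline)].

    model : callable mapping an (N, D) array to (N,) outputs
    X     : (N, D) samples of the input features
    Assumes features have no causal edges among themselves, so the
    intervention do(x_i = .) simply fixes column i across the samples.
    """
    X_do, X_ref = X.copy(), X.copy()
    X_do[:, i] = a              # intervene: set feature i to a
    X_ref[:, i] = baseline      # intervene: set feature i to the baseline
    return float(np.mean(model(X_do)) - np.mean(model(X_ref)))
```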

    Interpretable by Design: Learning Predictors by Composing Interpretable Queries

    There is a growing concern about typically opaque decision-making with high-performance machine learning algorithms. Providing an explanation of the reasoning process in domain-specific terms can be crucial for adoption in risk-sensitive domains such as healthcare. We argue that machine learning algorithms should be interpretable by design and that the language in which these interpretations are expressed should be domain- and task-dependent. Consequently, we base our model's prediction on a family of user-defined and task-specific binary functions of the data, each having a clear interpretation to the end-user. We then minimize the expected number of queries needed for accurate prediction on any given input. As the solution is generally intractable, following prior work, we choose the queries sequentially based on information gain. However, in contrast to previous work, we need not assume the queries are conditionally independent. Instead, we leverage a stochastic generative model (VAE) and an MCMC algorithm (Unadjusted Langevin) to select the most informative query about the input based on previous query-answers. This enables the online determination of a query chain of whatever depth is required to resolve prediction ambiguities. Finally, experiments on vision and NLP tasks demonstrate the efficacy of our approach and its superiority over post-hoc explanations. Comment: 29 pages, 14 figures. Accepted as a Regular Paper in IEEE Transactions on Pattern Analysis and Machine Intelligence.
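    One greedy step of the information-gain selection can be sketched as follows. This is an illustrative reconstruction, not the authors' code: it assumes posterior samples of (x, y) consistent with the history are already available (in the paper these would come from the VAE with Unadjusted Langevin sampling) and scores each unasked binary query by its estimated mutual information with the label.

```python
import numpy as np

def next_query(x_samples, y_samples, queries, asked):
    """Pick the unasked query maximizing estimated I(q(X); Y | history).

    x_samples : list of inputs drawn from p(x | history)
    y_samples : (S,) array of labels drawn jointly with x_samples
    queries   : list of binary functions q(x) -> 0 or 1
    asked     : set of indices of queries already answered
    """
    def entropy(p):
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())

    classes = np.unique(y_samples)
    p_y = np.array([(y_samples == c).mean() for c in classes])
    best, best_mi = None, -1.0
    for i, q in enumerate(queries):
        if i in asked:
            continue
        a = np.array([q(x) for x in x_samples])   # sampled answers
        mi = entropy(p_y)                         # H(Y | history)
        for v in (0, 1):
            mask = a == v
            if mask.any():                        # subtract E_a[H(Y | a)]
                p_cond = np.array([(y_samples[mask] == c).mean()
                                   for c in classes])
                mi -= mask.mean() * entropy(p_cond)
        if mi > best_mi:
            best, best_mi = i, mi
    return best
```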

    Variational Information Pursuit for Interpretable Predictions

    There is growing interest in the machine learning community in developing predictive algorithms that are "interpretable by design". Towards this end, recent work proposes to make interpretable decisions by sequentially asking interpretable queries about data until a prediction can be made with high confidence based on the answers obtained (the history). To promote short query-answer chains, a greedy procedure called Information Pursuit (IP) is used, which adaptively chooses queries in order of information gain. Generative models are employed to learn the distribution of query-answers and labels, which is in turn used to estimate the most informative query. However, learning and inference with a full generative model of the data is often intractable for complex tasks. In this work, we propose Variational Information Pursuit (V-IP), a variational characterization of IP which bypasses the need for learning generative models. V-IP is based on finding a query selection strategy and a classifier that minimize the expected cross-entropy between true and predicted labels. We then demonstrate that the IP strategy is the optimal solution to this problem. Therefore, instead of learning generative models, we can use our optimal strategy to directly pick the most informative query given any history. We then develop a practical algorithm by defining a finite-dimensional parameterization of our strategy and classifier using deep networks and training them end-to-end using our objective. Empirically, V-IP is 10-100x faster than IP on various vision and NLP tasks with competitive performance. Moreover, V-IP finds much shorter query chains than reinforcement learning, which is typically used in sequential decision-making problems. Finally, we demonstrate the utility of V-IP on challenging tasks like medical diagnosis, where its performance is far superior to the generative modelling approach. Comment: Code is available at https://github.com/ryanchankh/VariationalInformationPursuit
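    A minimal PyTorch sketch of the core V-IP training step, under simplifying assumptions (all query-answers precomputed per example, linear querier and classifier, Gumbel-Softmax for differentiable query selection); all names and shapes are illustrative, not the released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

Q, C = 64, 10                                  # queries / classes (illustrative)
querier = nn.Linear(2 * Q, Q)                  # masked history -> query scores
classifier = nn.Linear(2 * Q, C)               # masked history -> label logits
opt = torch.optim.Adam(list(querier.parameters()) +
                       list(classifier.parameters()))

def vip_step(answers, y, mask):
    """answers: (B, Q) floats in {0,1}; y: (B,) long; mask: (B, Q) float."""
    hist = torch.cat([answers * mask, mask], dim=1)
    scores = querier(hist).masked_fill(mask.bool(), -1e9)  # no repeat queries
    pick = F.gumbel_softmax(scores, hard=True)   # differentiable next query
    new_mask = mask + pick
    new_hist = torch.cat([answers * new_mask, new_mask], dim=1)
    loss = F.cross_entropy(classifier(new_hist), y)  # expected cross-entropy
    opt.zero_grad(); loss.backward(); opt.step()
    return new_mask.detach(), loss.item()
```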

    The United States COVID-19 Forecast Hub dataset

    Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at the county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available for download from GitHub, through an online API, and through R packages.
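    As a usage sketch, forecasts can be read directly from the repository's raw CSVs. The path, file name, and column names below follow the hub's documented quantile format but are assumptions to verify against the GitHub repository:

```python
import pandas as pd

# assumed path and file naming per the hub's data-processed layout; verify
url = ("https://raw.githubusercontent.com/reichlab/covid19-forecast-hub/"
       "master/data-processed/COVIDhub-ensemble/"
       "2020-12-14-COVIDhub-ensemble.csv")
df = pd.read_csv(url, dtype={"location": str})
# keep point forecasts of 1-week-ahead incident deaths
point = df[(df["target"] == "1 wk ahead inc death") & (df["type"] == "point")]
print(point[["location", "target_end_date", "value"]].head())
```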